Sparse Representations Are Most Likely to Be the Sparsest Possible
Abstract
Given a signal S ∈ RN and a full rank matrix D ∈ RN×L with N < L, we define the signal’s overcomplete representations as all α ∈ RL satisfying S = Dα. Among all the possible solutions, we have special interest in the sparsest one – the one minimizing ‖α‖0. Previous work has established that a representation is unique if it is sparse enough, requiring ‖α‖0 < Spark(D)/2. The measure Spark(D) stands for the minimal number of columns from D that are linearly dependent. This bound is tight – examples can be constructed to show that with Spark(D)/2 or more non-zero entries, uniqueness is violated. In this paper we study the behavior of overcomplete representations beyond the above bound. While tight from a worst-case standpoint, a probabilistic point-of-view leads to uniqueness of representations satisfying ‖α‖0 < Spark(D). Furthermore, we show that even beyond this point, uniqueness can still be claimed with high confidence. This new result is important for the study of the average performance of pursuit algorithms – when trying to show an equivalence between the pursuit result and the ideal solution, one must also guarantee that the ideal result is indeed the sparsest.
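The uniqueness bound above hinges on Spark(D), the smallest number of columns of D that are linearly dependent. As a minimal illustration (not from the paper), Spark can be computed by brute force on small matrices by testing the rank of every column subset; for a random Gaussian dictionary with N < L, any N columns are almost surely independent while any N+1 columns in R^N must be dependent, so Spark(D) = N + 1 with probability 1:

```python
import itertools
import numpy as np

def spark(D, tol=1e-10):
    """Brute-force Spark(D): the smallest number of columns of D that
    are linearly dependent (exponential cost; small matrices only)."""
    N, L = D.shape
    for k in range(1, L + 1):
        for cols in itertools.combinations(range(L), k):
            # k columns are dependent iff the submatrix is rank-deficient.
            if np.linalg.matrix_rank(D[:, cols], tol=tol) < k:
                return k
    return L + 1  # all columns independent (impossible when N < L)

# Hypothetical example: a random 3x5 Gaussian dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((3, 5))
print(spark(D))  # almost surely 4 = N + 1 for Gaussian columns
```

By the worst-case bound quoted in the abstract, any representation over this D with ‖α‖0 < 4/2 = 2, i.e. a single non-zero entry, is guaranteed unique; the paper's probabilistic analysis extends uniqueness toward ‖α‖0 < Spark(D) and beyond.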
Similar resources
A Weighted Average of Sparse Representations is Better than the Sparsest One Alone
Cleaning of noise from signals is a classical and long-studied problem in signal processing. Algorithms for this task necessarily rely on a-priori knowledge about the signal characteristics, along with information about the noise properties. For signals that admit sparse representations over a known dictionary, a commonly used denoising technique is to seek the sparsest representation that s...
Image Denoising Using Sparse Representations
The problem of removing white zero-mean Gaussian noise from an image is an interesting inverse problem, investigated in this paper through sparse and redundant representations. However, finding the sparsest possible solution in the noisy scenario has been a subject of great debate among researchers. In this paper we make use of a new approach to solve this problem and show that it is comparable with th...
Theoretical Results About Finding the Sparsest Representations of Multiple Measurement Vectors (MMV) in an Over-complete Dictionary, Using ℓ1-Norm Minimization and Greedy Algorithms
Multiple Measurement Vectors (MMV) is a newly emerged problem in sparse overcomplete representation, and efficient methods have been designed for it. While many theoretical results are available for the simpler Single Measurement Vector (SMV) case, theoretical analysis of MMV is lacking. In this paper, some known SMV results are generalized to MMV; new results particularly for MMV are a...
Recovering non-negative and combined sparse representations
The non-negative solution to an underdetermined linear system can sometimes be uniquely recovered, even without imposing any additional sparsity constraints. In this paper, we derive conditions under which a unique non-negative solution for such a system can exist, based on the theory of polytopes. Furthermore, we develop the paradigm of combined sparse representations, where only a part of the...
Highly sparse representations from dictionaries are unique and independent of the sparseness measure
The purpose of this paper is to study sparse representations of signals from a general dictionary in a Banach space. For so-called localized frames in Hilbert spaces, the canonical frame coefficients are shown to provide a near-sparsest expansion for several sparseness measures. However, for frames which are not localized, this no longer holds true, and sparse representations may depend strongly...
Journal: EURASIP J. Adv. Sig. Proc.
Volume: 2006, Issue: -
Pages: -
Publication year: 2006